Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
This paper develops a Pontryagin differentiable programming (PDP) methodology, which establishes a unified framework for solving a broad class of learning and control tasks. The PDP distinguishes itself from existing methods by two novel techniques: first, we differentiate through Pontryagin's Maximum Principle, which allows us to obtain the analytical derivative of a trajectory with respect to tunable parameters within an optimal control system, enabling end-to-end learning of dynamics, policies, and/or control objective functions; second, we propose an auxiliary control system in the backward pass of the PDP framework, whose output is the analytical derivative of the original system's trajectory with respect to the parameters and which can be solved iteratively using standard control tools. We investigate three learning modes of the PDP: inverse reinforcement learning, system identification, and control/planning. We demonstrate the capability of the PDP in each learning mode on different high-dimensional systems, including a multi-link robot arm, a 6-DoF maneuvering UAV, and 6-DoF rocket powered landing.
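The core idea, obtaining the derivative of an optimal solution with respect to problem parameters by implicitly differentiating the optimality (stationarity) conditions rather than the solution formula itself, can be illustrated with a deliberately tiny toy sketch. This is not the paper's PDP algorithm (which handles full trajectories via an auxiliary control system); it is a single-decision analogue under an assumed scalar cost, with all names chosen for illustration:

```python
import numpy as np

# Toy single-decision problem (illustrative only, not the paper's PDP):
#   minimize over u:  J(u; theta) = 0.5 * theta * u**2 + c * u
# PMP-style stationarity condition:  dJ/du = theta * u + c = 0
# which gives the optimal decision  u*(theta) = -c / theta.

c = 2.0
theta = 3.0
u_star = -c / theta  # optimal "control" for this theta

# Implicitly differentiate the stationarity condition w.r.t. theta:
#   d/dtheta [theta * u*(theta) + c] = u* + theta * du*/dtheta = 0
#   =>  du*/dtheta = -u* / theta
# Note this needs only the optimality condition, not the closed form of u*.
du_dtheta_implicit = -u_star / theta

# Sanity check against a central finite difference on u*(theta) = -c / theta.
eps = 1e-6
du_dtheta_fd = ((-c / (theta + eps)) - (-c / (theta - eps))) / (2 * eps)

print(abs(du_dtheta_implicit - du_dtheta_fd) < 1e-6)
```

In the paper's setting the same principle is applied along an entire optimal trajectory: differentiating the PMP conditions yields a linear system in the trajectory derivatives, which the proposed auxiliary control system solves in the backward pass.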
Review for NeurIPS paper: Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
Weaknesses: The presentation of the unified framework suffers from some flaws in the mathematical exposition, and could be greatly improved. In particular, the casual presentation of Equation (2) makes it difficult to parse. This casual presentation also (a) does not make it obvious that the later equations (e.g. Eq (3)) are actually bi-level optimization frameworks, which is what makes the contribution of the proposed paper difficult/novel, and (b) makes some of the definitions building off of Eq (2) also imprecise or hard to understand. Please see the "additional feedback" section for specific suggestions as to how to clarify this framework. As this conceptual framework is a major contribution of the paper, it is critical that the authors improve the presentation.
Review for NeurIPS paper: Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
This paper proposes a unified framework for solving inverse RL, system identification, and optimal control problems, using implicit differentiation through Pontryagin's Maximum Principle (PMP). The paper applies this framework to imitation learning, system identification, and optimal control tasks in four experimental domains, and shows improved performance compared to other methods. R1, who has a more traditional control-theory background, observes that in presenting this unified view the authors make the implicit claim that the method is equally useful in all three modes: system identification, optimal control, and inverse optimal control. In reality, the proposed method presents an inductive bias from the PMP that is most useful for the first and third modes (SysID and IOC), where the currently dominant approaches are the inverse KKT method of Toussaint and others and bespoke SysID methods. In terms of optimal control, the proposed unification is more of an abstraction and interpretation, and does not offer a new method.